Results 1 - 20 of 48
1.
Ear Hear; 45(1): 164-173, 2024.
Article in English | MEDLINE | ID: mdl-37491715

ABSTRACT

OBJECTIVES: Speech perception training can be a highly effective intervention to improve perception and language abilities in children who are deaf or hard of hearing. Most studies of speech perception training, however, only measure gains immediately following training; only a minority include a follow-up assessment after a period without training. A critical unanswered question was whether training-related benefits are retained after training has stopped. A primary goal of this investigation was to determine whether children retained training-related benefits 4 to 6 weeks after they completed 16 hours of formal speech perception training, which consisted of auditory training, speechreading training, or a combination of the two. It is also important to determine whether "booster" training can help increase gains made during the initial intensive training period, so another goal of the study was to investigate the benefits of providing home-based booster training during the 4- to 6-week interval after the formal training ceased. The original investigation (Tye-Murray et al. 2022) compared the effects of talker familiarity and the relative benefits of the different types of training. We predicted that the children who received no additional training would retain the gains made during the formal training, and that the children who completed the booster training would realize additional gains. DESIGN: Children, 6 to 12 years old, with hearing loss who had previously participated in the original randomized controlled study returned 4 to 6 weeks after its conclusion to take a follow-up speech perception assessment. The first group (n = 44) received no formal intervention from the research team before the follow-up assessment. A second group of 40 children completed an additional 16 hours of speech perception training at home during the 4- to 6-week interval before the follow-up assessment. The home-based training was a continuation of the training received in the laboratory, reformatted to run on a PC tablet with a portable speaker. The follow-up assessment included measures of listening and speechreading, with test items spoken by both familiar (trained) and unfamiliar (untrained) talkers. RESULTS: The group that did not receive booster training retained all gains that were obtained immediately following the laboratory-based training. The group that received booster training during the same interval also maintained the benefits of the formal training, with some indication of minor further improvement. CONCLUSIONS: Clinically, the present findings are extremely encouraging; the group that did not receive home-based booster training retained the benefits obtained during the laboratory-based training regimen. Moreover, the results suggest that self-paced booster training maintained the relative training gains associated with talker familiarity and training type seen immediately following laboratory-based training. Future aural rehabilitation programs should include maintenance training at home to supplement the speech perception training conducted under more formal conditions at school or in the clinic.


Subject(s)
Correction of Hearing Impairment , Deafness , Hearing Loss , Speech Perception , Child , Humans , Hearing Loss/rehabilitation , Lipreading , Correction of Hearing Impairment/methods
2.
Am J Audiol; 31(3S): 905-913, 2022 Sep 21.
Article in English | MEDLINE | ID: mdl-36037482

ABSTRACT

PURPOSE: A digital therapeutic is a software-based intervention for a disease and/or disorder and often includes a daily, interactive curriculum and exercises; online support from a professional versed in the treatment base; and an online support community, typically active as a social chat group. Recently, the Consumer Technology Association published revised standards for digital therapeutics (DTx) stipulating that a DTx must be founded in scientific evidence showing effectiveness and must be supported by evidence showing improved patient satisfaction and adherence to an intervention. The purpose of this study was to investigate whether a DTx could help older adults better adjust to their hearing loss and acclimate to new hearing aids. METHOD: Thirty older adults with mild or moderate hearing loss who had never used hearing aids participated. All hearing aids were fitted remotely. Participants used a hearing health care DTx (Amptify) for 4 weeks, either immediately following receipt of the hearing aids or 4 weeks after the fitting. The control condition was watching closed-caption television. Participants completed a satisfaction questionnaire querying their impressions of the DTx, with items that included both a 1-7 rating scale and open-ended questions. RESULTS: Ninety-six percent of the participants reported positive benefits, and one-half reported that the DTx helped them adjust to their new hearing aids. They assigned a score of 5.8 to a questionnaire item similar to a Net Promoter Score. Benefits included an enhanced ability to engage in conversation and increased listening confidence. CONCLUSION: This investigation provides scientific evidence to support the use of a hearing health care DTx, paving the way for audiologists to incorporate follow-up aural rehabilitation into their routine clinical services more easily and efficiently and to provide services remotely.


Subject(s)
Hearing Aids , Hearing Loss , Aged , Hearing , Hearing Loss/rehabilitation , Humans , Midazolam , Patient Satisfaction
3.
Ear Hear; 43(1): 181-191, 2022.
Article in English | MEDLINE | ID: mdl-34225318

ABSTRACT

OBJECTIVES: Transfer appropriate processing (TAP) refers to the general finding that training gains are maximized when training and testing are conducted under the same conditions. The present study tested the extent to which TAP applies to speech perception training in children with hearing loss. Specifically, we assessed the benefits of computer-based speech perception training games for enhancing children's speech recognition by comparing three training groups: auditory training (AT), audiovisual training (AVT), and a combination of the two (AT/AVT). We also determined whether talker-specific training, as might occur when children train with the speech of next year's classroom teacher, leads to better recognition of that talker's speech and, if so, the extent to which training benefits generalize to untrained talkers. Consistent with TAP theory, we predicted that children would improve their ability to recognize the speech of the trained talker more than that of three untrained talkers and, depending on their training group, would improve more on an auditory-only (listening) or audiovisual (speechreading) speech perception assessment that matched the type of training they received. We also hypothesized that benefit would generalize to untrained talkers and to test modalities in which they did not train, albeit to a lesser extent. DESIGN: Ninety-nine elementary school-aged children with hearing loss were enrolled in a randomized controlled trial with a repeated-measures A-A-B experimental mixed design in which children served as their own control for the assessment of overall benefit of a particular training type, and three different groups of children yielded data for comparing the three types of training. We assessed talker-specific learning and transfer of learning by including speech perception tests with stimuli spoken by the talker with whom a child trained and by three talkers with whom the child did not train, presented as both auditory (listening) and audiovisual (speechreading) stimuli. Children received 16 hr of gamified training. The games provided word identification and connected speech comprehension training activities. RESULTS: Overall, children showed significant improvement in both their listening and speechreading performance. Consistent with TAP theory, children improved more on their trained talker than on the untrained talkers. Also consistent with TAP theory, the children who received AT improved more on listening than on speechreading. However, children who received AVT improved on both types of assessment equally, which is not consistent with our predictions derived from a TAP perspective. Age, language level, and phonological awareness were either not predictive of training benefits or only negligibly so. CONCLUSIONS: The findings provide support for the practice of providing children who have hearing loss with structured speech perception training and suggest that future aural rehabilitation programs might include teacher-specific speech perception training to prepare children for an upcoming school year, especially since training will generalize to other talkers. The results also suggest that benefits of speech perception training are not significantly related to age, language level, or degree of phonological awareness. The findings are largely consistent with TAP theory, suggesting that the more closely a training task is aligned with the desired outcome, the more likely benefit will accrue.


Subject(s)
Deafness , Hearing Loss , Speech Perception , Child , Computers , Humans , Lipreading , Speech
4.
J Neurosci; 42(3): 435-442, 2022 Jan 19.
Article in English | MEDLINE | ID: mdl-34815317

ABSTRACT

In everyday conversation, we usually process the talker's face as well as the sound of the talker's voice. Access to visual speech information is particularly useful when the auditory signal is degraded. Here, we used fMRI to monitor brain activity while adult humans (n = 60) were presented with visual-only, auditory-only, and audiovisual words. The audiovisual words were presented in quiet and at several signal-to-noise ratios. As expected, audiovisual speech perception recruited both auditory and visual cortex, with some evidence for increased recruitment of premotor cortex in some conditions (including in substantial background noise). We then investigated neural connectivity using psychophysiological interaction analysis with seed regions in both primary auditory cortex and primary visual cortex. Connectivity between auditory and visual cortices was stronger in audiovisual conditions than in unimodal conditions, including a wide network of regions in posterior temporal cortex and prefrontal cortex. In addition to whole-brain analyses, we also conducted a region-of-interest analysis on the left posterior superior temporal sulcus (pSTS), implicated in many previous studies of audiovisual speech perception. We found evidence for both activity and effective connectivity in pSTS for visual-only and audiovisual speech, although these were not significant in whole-brain analyses. Together, our results suggest a prominent role for cross-region synchronization in understanding both visual-only and audiovisual speech that complements activity in integrative brain regions such as pSTS.

SIGNIFICANCE STATEMENT: In everyday conversation, we usually process the talker's face as well as the sound of the talker's voice. Access to visual speech information is particularly useful when the auditory signal is hard to understand (e.g., in background noise). Prior work has suggested that specialized regions of the brain may play a critical role in integrating information from visual and auditory speech. Here, we show that a complementary mechanism, relying on synchronized brain activity among sensory and motor regions, may also play a critical role. These findings encourage reconceptualizing audiovisual integration in the context of coordinated network activity.


Subject(s)
Auditory Cortex/physiology , Language , Lipreading , Nerve Net/physiology , Speech Perception/physiology , Visual Cortex/physiology , Visual Perception/physiology , Adult , Aged , Aged, 80 and over , Auditory Cortex/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Nerve Net/diagnostic imaging , Visual Cortex/diagnostic imaging , Young Adult
5.
Lang Speech Hear Serv Sch; 52(4): 1049-1060, 2021 Oct 18.
Article in English | MEDLINE | ID: mdl-34403290

ABSTRACT

Purpose: A meaning-oriented auditory training program for children who are deaf or hard of hearing (d/hh) was assessed with regard to its efficacy in promoting novel word learning. Method: While administering the auditory training program, one of the authors (Elizabeth Mauzé) observed that children were learning words they previously did not know. Therefore, we systematically assessed vocabulary gains among 16 children. Most completed pretest, posttest, and retention versions of a picture-naming task in which they attempted to verbally identify 199 color pictures of words that would appear during training. The posttest and retention versions included both pictures used and pictures not used during training in order to test generalization of associations between words and their referents. Importantly, each training session involved meaning-oriented, albeit simple, activities/games on a computer. Results: At posttest, the percentage of word gain was 27.3% (SD = 12.5; confidence interval [CI] of the mean: 24.2-30.4) using trained pictures as cues and 25.9% (CI of the mean: 22.9-29.0) using untrained pictures as cues. An analysis of retention scores (for the 13 participants who completed that version weeks later) indicated strikingly high levels of retention for the words that had been learned. Conclusions: These findings favor auditory training that is meaning oriented when it comes to the acquisition of different linguistic subsystems, lexis in this case. We also expand the discussion to include other evidence-based recommendations regarding how vocabulary is presented (input-based effects) and what learners are asked to do (task-based effects) as part of an overall effort to help children who are d/hh increase their vocabulary knowledge.
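The headline statistic here, a mean percent gain with a confidence interval of the mean, is simple to reproduce. Below is a minimal sketch in Python, assuming a normal-approximation (z-based) CI; the gain values are invented for illustration and are not the study's data.

```python
# Minimal sketch: mean percent word gain with a 95% CI of the mean,
# using a normal (z) approximation. Values are invented, not study data.
import math

def mean_with_ci(values, z=1.96):
    """Return (mean, (lower, upper)) for a z-based CI of the mean."""
    n = len(values)
    m = sum(values) / n
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / (n - 1))
    half = z * sd / math.sqrt(n)
    return m, (m - half, m + half)

gains = [27.0, 31.5, 14.2, 42.0, 25.3, 30.1, 22.8, 19.6,
         35.4, 28.7, 24.9, 33.2, 21.4, 29.8, 26.5, 24.0]  # one gain per child
m, (lo, hi) = mean_with_ci(gains)
print(f"mean gain = {m:.1f}%, 95% CI [{lo:.1f}, {hi:.1f}]")
```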


Subject(s)
Hearing Loss , Vocabulary , Child , Hearing , Hearing Loss/therapy , Humans , Linguistics , Verbal Learning
6.
Ear Hear; 42(6): 1656-1667, 2021.
Article in English | MEDLINE | ID: mdl-34320527

ABSTRACT

OBJECTIVES: Spoken communication is better when one can see as well as hear the talker. Tye-Murray and colleagues found that even when age-related deficits in audiovisual (AV) speech perception were observed, AV performance could be accurately predicted from auditory-only (A-only) and visual-only (V-only) performance, and that knowing individuals' ages did not increase the accuracy of prediction. This finding contradicts conventional wisdom, according to which age-related differences in AV speech perception are due to deficits in the integration of auditory and visual information. Our primary goal was to determine whether Tye-Murray et al.'s finding with a closed-set test generalizes to situations more like those in everyday life; a second goal was to test a new predictive model that has important implications for audiological assessment. DESIGN: Participants (N = 109; ages 22-93 years), previously studied by Tye-Murray et al., were administered our new, open-set Lex-List test to assess their auditory, visual, and audiovisual perception of individual words. All testing was conducted in six-talker babble (three males and three females) presented at approximately 62 dB SPL. The audio for the Lex-List items, when presented, was approximately 59 dB SPL because pilot testing suggested that this signal-to-noise ratio would avoid ceiling performance in the AV condition. RESULTS: Multiple linear regression analyses revealed that A-only and V-only performance accounted for 87.9% of the variance in AV speech perception, and the contribution of age failed to reach significance. Our new parabolic model accounted for even more (92.8%) of the variance in AV performance, and again the contribution of age was not significant. Bayesian analyses revealed that for both the linear and the parabolic model, the present data were almost 10 times as likely to occur with a reduced model (without age) as with a full model (with age as a predictor). Furthermore, comparison of the two reduced models revealed that the data were more than 100 times as likely to occur with the parabolic model as with the linear regression model. CONCLUSIONS: The present results strongly support Tye-Murray et al.'s hypothesis that AV performance can be accurately predicted from unimodal performance and that knowing individuals' ages does not increase the accuracy of that prediction. Our results represent an important initial step in extending Tye-Murray et al.'s findings to situations more like those encountered in everyday communication. The accuracy with which speech perception was predicted in this study foreshadows a form of precision audiology in which determining individual strengths and weaknesses in unimodal and multimodal speech perception facilitates identification of targets for rehabilitative efforts aimed at recovering and maintaining the speech perception abilities critical to the quality of an older adult's life.
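The model comparison reported in this abstract lends itself to a brief illustration. The Python sketch below fits a linear model (A-only and V-only predictors) and a quadratic-term stand-in for the "parabolic" model to simulated scores and compares variance explained; the paper's actual model form, data, and Bayesian comparison are not reproduced here.

```python
# Minimal sketch: predict AV speech perception from unimodal (A, V) scores
# with a linear model and a quadratic-term ("parabolic") model, comparing
# variance explained. Data are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 109                                  # sample size reported in the study
A = rng.uniform(0.2, 0.9, n)             # simulated A-only proportion correct
V = rng.uniform(0.1, 0.6, n)             # simulated V-only proportion correct
AV = np.clip(A + 0.5 * V * (1 - A) + rng.normal(0, 0.05, n), 0, 1)

def r_squared(X, y):
    """Ordinary least squares; proportion of variance in y explained by X."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

linear_r2 = r_squared(np.column_stack([A, V]), AV)
quad_r2 = r_squared(np.column_stack([A, V, A**2, V**2, A * V]), AV)
print(f"linear R^2 = {linear_r2:.3f}, quadratic R^2 = {quad_r2:.3f}")
```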


Subject(s)
Audiology , Speech Perception , Adult , Aged , Aged, 80 and over , Bayes Theorem , Female , Hearing , Humans , Male , Middle Aged , Noise , Visual Perception , Young Adult
7.
J Neurosci Res; 98(9): 1800-1814, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32415883

ABSTRACT

Deleterious age-related changes in the central auditory nervous system have been referred to as central age-related hearing impairment (ARHI) or central presbycusis. Central ARHI is often assumed to be the consequence of peripheral ARHI. However, it is possible that certain aspects of central ARHI are independent of peripheral ARHI. A confirmation of this possibility could lead to significant improvements in current rehabilitation practices. The major difficulty in addressing this issue arises from confounding factors, such as other age-related changes in both the cochlea and central non-auditory brain structures. Because gap detection is a common measure of central auditory temporal processing, and gap detection thresholds are less influenced by changes in other brain functions such as learning and memory, we investigated the potential relationship between age-related peripheral hearing loss (i.e., audiograms) and age-related changes in gap detection. Consistent with previous studies, a significant difference was found in gap detection thresholds between young and older adults. However, among older adults, no significant associations were observed between gap detection ability and several other independent variables including the pure tone audiogram average, the Wechsler Adult Intelligence Scale-Vocabulary score, gender, and age. Statistical analyses showed little or no contribution from these independent variables to gap detection thresholds. Thus, our data indicate that age-related decline in central temporal processing is largely independent of peripheral ARHI.


Subject(s)
Auditory Perception/physiology , Hearing Loss, Central/physiopathology , Presbycusis/physiopathology , Adult , Age Factors , Aged , Aged, 80 and over , Aging/physiology , Auditory Threshold , Cochlea/physiopathology , Female , Humans , Male , Middle Aged , Young Adult
8.
Ear Hear; 41(3): 549-560, 2020.
Article in English | MEDLINE | ID: mdl-31453875

ABSTRACT

OBJECTIVES: This study was designed to examine how speaking rate affects auditory-only, visual-only, and auditory-visual speech perception across the adult lifespan. In addition, the study examined the extent to which unimodal (auditory-only and visual-only) performance predicts auditory-visual performance across a range of speaking rates. The authors hypothesized significant Age × Rate interactions in all three modalities and that unimodal performance would account for a majority of the variance in auditory-visual speech perception for speaking rates both slower and faster than normal. DESIGN: Participants (N = 145), ranging in age from 22 to 92 years, were tested in conditions with auditory-only, visual-only, and auditory-visual presentations using a closed-set speech perception test. Five different speaking rates were presented in each modality: an unmodified (normal) rate, two rates slower than normal, and two rates faster than normal. Signal-to-noise ratios were set individually to produce approximately 30% correct identification in the auditory-only condition, and this signal-to-noise ratio was used in the auditory-only and auditory-visual conditions. RESULTS: Age × Rate interactions were observed for the fastest speaking rates in both the visual-only and auditory-visual conditions. Unimodal performance accounted for at least 60% of the variance in auditory-visual performance for all five speaking rates. CONCLUSIONS: The findings demonstrate that the disproportionate difficulty that older adults have with rapid speech in auditory-only presentations can also be observed with visual-only and auditory-visual presentations. Taken together, the present analyses of age and individual differences indicate a generalized age-related decline in the ability to understand speech produced at fast speaking rates. The finding that auditory-visual speech performance was almost entirely predicted by unimodal performance across all five speaking rates has important clinical implications for auditory-visual speech perception and for the ability of older adults to use visual speech information to compensate for age-related hearing loss.


Subject(s)
Speech Perception , Acoustic Stimulation , Aged , Auditory Perception , Humans , Speech , Visual Perception
9.
J Speech Lang Hear Res; 60(8): 2337-2345, 2017 Aug 16.
Article in English | MEDLINE | ID: mdl-28787475

ABSTRACT

Purpose: The spacing effect in human memory research refers to situations in which people learn items better when they study them at spaced intervals rather than massed intervals. This investigation was conducted to compare the efficacy of meaning-oriented auditory training when administered on a spaced versus a massed practice schedule. Method: Forty-seven adult hearing aid users received 16 hr of auditory training. Participants in a spaced group (mean age = 64.6 years, SD = 14.7) trained twice per week, and participants in a massed group (mean age = 69.6 years, SD = 17.5) trained for 5 consecutive days each week. Participants completed speech perception tests before training, immediately following training, and then 3 months later. In line with transfer appropriate processing theory, the tests assessed both trained tasks and an untrained task. Results: Auditory training improved the speech recognition performance of participants in both groups. Benefits were maintained for 3 months. No effect of practice schedule was found on overall benefits achieved, on retention of benefits, or on generalizability of benefits to nontrained tasks. Conclusion: The lack of a spacing effect in otherwise effective auditory training suggests that perceptual learning may be subject to different influences than other types of learning, such as vocabulary learning. Hence, clinicians might have latitude in recommending training schedules that accommodate patients' schedules.


Subject(s)
Auditory Perception , Hearing Loss, Sensorineural/therapy , Practice, Psychological , Aged , Female , Hearing Aids , Hearing Loss, Sensorineural/psychology , Humans , Male , Middle Aged , Treatment Outcome
10.
J Child Lang; 44(1): 185-215, 2017 Jan.
Article in English | MEDLINE | ID: mdl-26752548

ABSTRACT

Adults use vision to perceive low-fidelity speech, yet how children acquire this ability is not well understood. The literature indicates that children show reduced sensitivity to visual speech from kindergarten to adolescence. We hypothesized that this pattern reflects the effects of complex tasks and a growth period with harder-to-utilize cognitive resources, not a lack of sensitivity. We investigated sensitivity to visual speech in children via the phonological priming produced by low-fidelity (non-intact onset) auditory speech presented audiovisually (see a dynamic face articulating the consonant/rhyme b/ag; hear the non-intact onset/rhyme -b/ag) versus auditorily (see a still face; hear exactly the same auditory input). Audiovisual speech produced greater priming from four to fourteen years, indicating that visual speech filled in the non-intact auditory onsets. The influence of visual speech depended uniquely on phonology and speechreading. Children, like adults, perceive speech onsets multimodally. These findings are critical for incorporating visual speech into developmental theories of speech perception.


Subject(s)
Lipreading , Speech Perception/physiology , Visual Perception/physiology , Adolescent , Auditory Perception/physiology , Child , Child, Preschool , Female , Humans , Male , Speech
11.
J Speech Lang Hear Res; 59(4): 862-70, 2016 Aug 1.
Article in English | MEDLINE | ID: mdl-27567015

ABSTRACT

PURPOSE: This investigation focused on generalization of auditory training outcomes by examining the effects of task and/or talker overlap between training and testing. METHOD: Adults with hearing loss completed 12 hr of meaning-oriented auditory training and were placed in a group that trained on either multiple talkers or a single talker. A control group instead completed 12 hr of training in American Sign Language. The experimental group's training included a 4-choice discrimination task but not an open-set sentence test. The assessment phase included the same 4-choice discrimination task and an open-set sentence test, the Iowa Sentences Test (Tyler, Preece, & Tye-Murray, 1986). RESULTS: Improvement on 4-choice discrimination was observed in the experimental group as compared with the control group. Gains were (a) highest when the task and talker were the same between training and assessment; (b) second highest when the task was the same but the talker only partially so; and (c) third highest when task and talker were different. CONCLUSIONS: The findings support applications of transfer-appropriate processing to auditory training and favor tailoring programs toward the specific needs of the individuals being trained with respect to tasks, talkers, and perhaps stimuli, in addition to other factors.


Subject(s)
Hearing Loss/rehabilitation , Sign Language , Speech Perception , Aged , Analysis of Variance , Discrimination, Psychological , Female , Hearing Loss/psychology , Humans , Male , Speech Discrimination Tests , Treatment Outcome
12.
J Speech Lang Hear Res; 59(4): 871-5, 2016 Aug 1.
Article in English | MEDLINE | ID: mdl-27567016

ABSTRACT

PURPOSE: Individuals with hearing loss engage in auditory training to improve their speech recognition. They typically practice listening to utterances spoken by unfamiliar talkers but never to utterances spoken by their most frequent communication partner (FCP), the speech they most likely desire to recognize, under the assumption that familiarity with the FCP's speech limits potential gains. This study determined whether auditory training with the speech of an individual's FCP, in this case their spouse, would lead to enhanced recognition of the spouse's speech. METHOD: Ten couples completed a 6-week computerized auditory training program in which the spouse recorded the stimuli and the participant (the partner with hearing loss) completed auditory training that presented recordings of their spouse. RESULTS: Training led participants to better discriminate their FCP's speech. Responses on the Client Oriented Scale of Improvement (Dillon, James, & Ginis, 1997) indicated subjectively that training reduced participants' communication difficulties. Performance on a word identification task did not change. CONCLUSIONS: Results suggest that auditory training might improve the ability of older participants with hearing loss to recognize the speech of their spouse and might improve communication interactions between couples. The results support a task-appropriate processing framework of learning, which assumes that human learning depends on the degree of similarity between training tasks and desired outcomes.


Subject(s)
Communication , Hearing Loss/rehabilitation , Spouses , Aged , Discrimination, Psychological , Female , Humans , Male , Spouses/psychology , Treatment Outcome
13.
Ear Hear; 37(6): 623-633, 2016.
Article in English | MEDLINE | ID: mdl-27438867

ABSTRACT

OBJECTIVES: This research determined (1) how phonological priming of picture naming was affected by the mode (auditory-visual [AV] versus auditory), fidelity (intact versus nonintact auditory onsets), and lexical status (words versus nonwords) of speech stimuli in children with prelingual sensorineural hearing impairment (CHI) versus children with normal hearing (CNH), and (2) how degree of HI, auditory word recognition, and age influenced results in the CHI. Note that the AV stimuli were not the traditional bimodal input; instead they consisted of an intact consonant/rhyme in the visual track coupled to a nonintact onset/rhyme in the auditory track. Example stimuli for the word bag are (1) AV: intact visual (b/ag) coupled to nonintact auditory (-b/ag) and (2) auditory: static face coupled to the same nonintact auditory (-b/ag). The question was whether the intact visual speech would "restore or fill in" the nonintact auditory speech, in which case performance for the same auditory stimulus would differ depending on the presence/absence of visual speech. DESIGN: Participants were 62 CHI and 62 CNH whose group mean age and age distribution were akin to those of the CHI group. Ages ranged from 4 to 14 years. All participants met the following criteria: (1) spoke English as a native language, (2) communicated successfully aurally/orally, and (3) had no diagnosed or suspected disabilities other than HI and its accompanying verbal problems. The phonological priming of picture naming was assessed with the multimodal picture word task. RESULTS: Both CHI and CNH showed greater phonological priming from high- than low-fidelity stimuli and from AV than auditory speech. These overall fidelity and mode effects did not differ in the CHI versus the CNH; thus these CHI appeared to have sufficiently well-specified phonological onset representations to support priming, and visual speech did not appear to be a disproportionately important source of the CHI's phonological knowledge. Two exceptions occurred, however. First, with regard to lexical status, both the CHI and CNH showed significantly greater phonological priming from the nonwords than the words, a pattern consistent with the prediction that children are more aware of phonetic-phonological content for nonwords. This overall pattern of similarity between the groups was qualified by the finding that the CHI showed more nearly equal priming by the high- versus low-fidelity nonwords than the CNH; in other words, the CHI were less affected by the fidelity of the auditory input for nonwords. Second, auditory word recognition, but not degree of HI or age, uniquely influenced phonological priming by the AV nonwords. CONCLUSIONS: With minor exceptions, phonological priming in the CHI and CNH showed more similarities than differences. Importantly, this research documented that the addition of visual speech significantly increased phonological priming in both groups. Clinically, these data support intervention programs that view visual speech as a powerful asset for developing spoken language in CHI.


Subject(s)
Acoustic Stimulation , Hearing Loss, Sensorineural/physiopathology , Photic Stimulation , Memory, Implicit , Vocabulary , Adolescent , Case-Control Studies , Child , Child, Preschool , Female , Humans , Male , Phonetics
14.
Psychol Aging; 31(4): 380-9, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27294718

ABSTRACT

In this study of visual (V-only) and audiovisual (AV) speech recognition in adults aged 22-92 years, the rate of age-related decrease in V-only performance was more than twice that in AV performance. Both auditory-only (A-only) and V-only performance were significant predictors of AV speech recognition, but age did not account for additional (unique) variance. Blurring the visual speech signal decreased speech recognition, and in AV conditions involving stimuli associated with equivalent unimodal performance for each participant, speech recognition remained constant from 22 to 92 years of age. Finally, principal components analysis revealed separate visual and auditory factors, but no evidence of an AV integration factor. Taken together, these results suggest that the benefit that comes from being able to see as well as hear a talker remains constant throughout adulthood and that changes in this AV advantage are entirely driven by age-related changes in unimodal visual and auditory speech recognition.


Subject(s)
Aging/psychology , Lipreading , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged , Photic Stimulation , Speech , Young Adult
15.
Atten Percept Psychophys; 78(1): 346-54, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26474981

ABSTRACT

Whereas the energetic and informational masking effects of unintelligible babble on auditory speech recognition are well established, the present study is the first to investigate its effects on visual speech recognition. Young and older adults performed two lipreading tasks while simultaneously experiencing either quiet, speech-shaped noise, or 6-talker background babble. Both words at the end of uninformative carrier sentences and key words in everyday sentences were harder to lipread in the presence of babble than in the presence of speech-shaped noise or quiet. Contrary to the inhibitory deficit hypothesis of cognitive aging, babble had equivalent effects on young and older adults. In a follow-up experiment, neither the babble nor the speech-shaped noise stimuli interfered with performance of a face-processing task, indicating that babble selectively interferes with visual speech recognition and not with visual perception tasks per se. The present results demonstrate that babble can produce cross-modal informational masking and suggest a breakdown in audiovisual scene analysis, either because of obligatory monitoring of even uninformative speech sounds or because of obligatory efforts to integrate speech sounds even with uncorrelated mouth movements.


Subject(s)
Aging/psychology , Lipreading , Perceptual Masking , Speech Perception , Visual Perception , Acoustic Stimulation/methods , Adult , Aged , Aged, 80 and over , Female , Humans , Male , Noise , Phonetics , Speech , Young Adult
16.
Reg Anesth Pain Med; 41(1): 65-8, 2016.
Article in English | MEDLINE | ID: mdl-26650432

ABSTRACT

BACKGROUND AND OBJECTIVES: New methods are needed to improve physicians' skill in communicating information and to enhance patients' ability to recall that information. We evaluated a real-time speech-to-text captioning system that simultaneously provided a speech-to-text record for both patient and anesthesiologist. The goals of the study were to assess hearing-impaired patients' recall of an informed consent discussion about regional anesthesia using real-time captioning and to determine whether the physicians found the system useful for monitoring their own performance. METHODS: We recorded 2 simulated informed consent encounters with hearing-impaired older adults, in which physicians described regional anesthetic procedures. The conversations were conducted with and without real-time captioning. Subsequently, the patient participants, who wore their hearing aids throughout, were tested on the material presented, and video recordings of the encounters were analyzed to determine how effectively physicians communicated with and without the captioning system. RESULTS: The anesthesiology residents provided similar information to the patient participants regardless of whether the real-time captioning system was used. Although the patients retained relatively few details of the informed consent discussion overall, they could recall significantly more of the key points when provided with real-time captioning. CONCLUSIONS: Real-time speech-to-text captioning improved recall in hearing-impaired patients and proved useful for determining the information provided during an informed consent encounter. Real-time speech-to-text captioning could provide a method for assessing physicians' communication that could be used both for self-assessment and as an evaluative approach to training communication skills in practice settings.


Subject(s)
Computer Systems/standards , Decision Making, Computer-Assisted , Hearing Loss/therapy , Informed Consent/standards , Patient Participation/methods , Physicians/standards , Anesthesia, Conduction/standards , Anesthesia, Conduction/statistics & numerical data , Female , Hearing Loss/psychology , Humans , Informed Consent/psychology , Male , Patient Participation/psychology , Physician-Patient Relations , Physicians/psychology
17.
J Speech Lang Hear Res; 58(3): 1093-102, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25863923

ABSTRACT

PURPOSE: This study compared the use of 2 different types of contextual cues (sentence based and situation based) in 2 different modalities (visual only and auditory only). METHOD: Twenty young adults were tested with the Illustrated Sentences Test (Tye-Murray, Hale, Spehar, Myerson, & Sommers, 2014) and the Speech Perception in Noise Test (Bilger, Nuetzel, Rabinowitz, & Rzeczkowski, 1984; Kalikow, Stevens, & Elliott, 1977) in the 2 modalities. The Illustrated Sentences Test presents sentences with no context and sentences accompanied by picture-based situational context cues. The Speech Perception in Noise Test presents sentences with low sentence-based context and sentences with high sentence-based context. RESULTS: Participants benefited from both types of context and received more benefit when testing occurred in the visual-only modality than when it occurred in the auditory-only modality. Participants' use of sentence-based context did not correlate with their use of situation-based context, and cue usage did not correlate between the 2 modalities. CONCLUSIONS: The ability to use contextual cues appears to depend on the type of cue and the presentation modality of the target word(s). In a theoretical sense, the results suggest that models of word recognition and sentence processing should incorporate the influence of multiple sources of information and recognize that the 2 types of context have different influences on speech perception. In a clinical sense, the results suggest that aural rehabilitation programs might provide training to optimize the use of both kinds of contextual cues.


Subject(s)
Child Language , Hearing Aids , Hearing Disorders/psychology , Lipreading , Speech Perception , Acoustic Stimulation/methods , Child , Cochlear Implantation , Cues , Discrimination, Psychological , Hearing Tests , Humans , Pattern Recognition, Physiological , Phonetics , Psychological Tests , Social Class , Speech , Speech Production Measurement
18.
Psychon Bull Rev; 22(4): 1048-53, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25421408

ABSTRACT

Individuals lip read themselves more accurately than they lip read others when only the visual speech signal is available (Tye-Murray et al., Psychonomic Bulletin & Review, 20, 115-119, 2013). This self-advantage for vision-only speech recognition is consistent with the common-coding hypothesis (Prinz, European Journal of Cognitive Psychology, 9, 129-154, 1997), which posits (1) that observing an action activates the same motor plan representation as actually performing that action and (2) that observing one's own actions activates motor plan representations more than observing others' actions, because of greater congruity between percepts and corresponding motor plans. The present study extends this line of research to audiovisual speech recognition by examining whether there is a self-advantage when the visual signal is added to the auditory signal under poor listening conditions. Participants were assigned to subgroups for round-robin testing in which each participant was paired with every member of their subgroup, including themselves, serving as both talker and listener/observer. On average, the benefit participants obtained from the visual signal when they were the talker was greater than when the talker was someone else and also was greater than the benefit others obtained from observing as well as listening to them. Moreover, the self-advantage in audiovisual speech recognition was significant after statistically controlling for individual differences in both participants' ability to benefit from a visual speech signal and the extent to which their own visual speech signal benefited others. These findings are consistent with our previous finding of a self-advantage in lip reading and with the hypothesis of a common code for action perception and motor plan representation.


Subject(s)
Lipreading , Noise , Speech Perception/physiology , Visual Perception/physiology , Adolescent , Adult , Auditory Perception , Female , Humans , Male , Motion Perception/physiology , Speech/physiology , Young Adult
19.
Semin Hear; 36(4): 263-72, 2015 Nov.
Article in English | MEDLINE | ID: mdl-27587913

ABSTRACT

There has been considerable interest in measuring the perceptual effort required to understand speech, as well as to identify factors that might reduce such effort. In the current study, we investigated whether, in addition to improving speech intelligibility, auditory training also could reduce perceptual or listening effort. Perceptual effort was assessed using a modified version of the n-back memory task in which participants heard lists of words presented without background noise and were asked to continually update their memory of the three most recently presented words. Perceptual effort was indexed by memory for items in the three-back position immediately before, immediately after, and 3 months after participants completed the Computerized Learning Exercises for Aural Rehabilitation (clEAR), a 12-session computerized auditory training program. Immediate posttraining measures of perceptual effort indicated that participants could remember approximately one additional word compared to pretraining. Moreover, some training gains were retained at the 3-month follow-up, as indicated by significantly greater recall for the three-back item at the 3-month measurement than at pretest. There was a small but significant correlation between gains in intelligibility and gains in perceptual effort. The findings are discussed within the framework of a limited-capacity speech perception system.
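The three-back index of perceptual effort described here is straightforward to score. Below is a minimal sketch in Python, assuming one response per presented word (the listener's report of the word heard three items earlier); the function name and data are illustrative, not taken from the clEAR program.

```python
# Minimal sketch: score a 3-back word-memory probe. responses[i] is the
# listener's report of presented[i - n], made when presented[i] appeared;
# positions before the first probe are None. Illustrative only.
def score_n_back(presented, responses, n=3):
    """Return the proportion of probes where the report matches the word
    presented n positions earlier (case-insensitive)."""
    hits = total = 0
    for i in range(n, len(presented)):
        total += 1
        if responses[i] and responses[i].lower() == presented[i - n].lower():
            hits += 1
    return hits / total if total else 0.0

words = ["boat", "lamp", "tree", "glass", "ring", "chair"]
reports = [None, None, None, "boat", "lamp", "wheel"]
print(score_n_back(words, reports))  # 2 of 3 probes correct -> 0.667
```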

20.
J Exp Child Psychol; 126: 295-312, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24974346

ABSTRACT

We investigated whether visual speech fills in non-intact auditory speech (excised consonant onsets) in typically developing children from 4 to 14 years of age. Stimuli with the excised auditory onsets were presented in the audiovisual (AV) and auditory-only (AO) modes. A visual speech fill-in effect occurs when listeners experience hearing the same non-intact auditory stimulus (e.g., /-b/ag) differently depending on the presence/absence of visual speech, such as hearing /bag/ in the AV mode but /ag/ in the AO mode. We quantified the visual speech fill-in effect as the difference in the number of correct consonant-onset responses between the two modes. We found that easy visual speech cues (/b/) provided greater filling in than difficult cues (/g/). Only older children benefited from difficult visual speech cues, whereas all children benefited from easy visual speech cues, although 4- and 5-year-olds did not benefit as much as older children. To explore task demands, we compared results on our new task with those on the McGurk task. The influence of visual speech was uniquely associated with age and vocabulary abilities for the visual speech fill-in effect but was uniquely associated with speechreading skills for the McGurk effect. This dissociation implies that visual speech, as processed by children, is a complicated and multifaceted phenomenon underpinned by heterogeneous abilities. These results emphasize that children perceive a speaker's utterance rather than the auditory stimulus per se. In children, as in adults, there is more to speech perception than meets the ear.
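The fill-in metric defined in this abstract (the AV-minus-AO difference in correct onset responses) can be made concrete with a short sketch. The Python below assumes counts of correct consonant-onset reports out of a fixed number of trials per mode; the numbers are invented for illustration.

```python
# Minimal sketch: quantify the visual speech fill-in effect as the
# difference in proportion of correct consonant-onset reports between
# audiovisual (AV) and auditory-only (AO) modes. Counts are invented.
def fill_in_effect(av_correct, ao_correct, n_trials):
    """AV minus AO proportion correct for the same onset-excised stimuli."""
    return av_correct / n_trials - ao_correct / n_trials

# e.g., a child reports the excised /b/ onset on 18 of 24 AV trials
# but on only 6 of 24 AO trials:
print(fill_in_effect(18, 6, 24))  # 0.5
```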


Subject(s)
Lipreading , Speech Perception , Speech , Acoustic Stimulation , Adolescent , Age Factors , Auditory Perception , Child , Child, Preschool , Cues , Female , Humans , Male , Phonetics , Visual Perception